
    Articulated Pose Estimation by a Graphical Model with Image Dependent Pairwise Relations

    We present a method for estimating articulated human pose from a single static image, based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact that local image measurements can be used both to detect parts (or joints) and to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. Our method significantly outperforms state-of-the-art methods on the LSP and FLIC datasets and also performs very well on the Buffy dataset without any training. Comment: NIPS 2014 Camera Ready
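
    A rough illustration of the scoring idea in this abstract is sketched below: unary DCNN part scores are combined with image-dependent pairwise terms over a mixture of spatial-relationship types. The data structures, the quadratic deformation cost, and the function name are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def score_pose(locs, edges, unary, rel_logp, rel_mean):
    """Score one candidate pose under a tree-structured model (illustrative).

    locs[i]          : (x, y) location of part i
    edges            : list of (i, j) index pairs forming the kinematic tree
    unary[i]         : dict {loc: log-prob that part i appears at loc}  (from a DCNN)
    rel_logp[(i, j)] : dict {loc_i: [log-prob of relation type t]}      (from a DCNN)
    rel_mean[(i, j)] : list of mean spatial offsets, one per relation type t
    """
    # unary appearance terms for every part
    total = sum(unary[i][locs[i]] for i in range(len(locs)))
    for (i, j) in edges:
        dx = np.subtract(locs[j], locs[i])
        # pick the best relation type: its DCNN score minus a quadratic
        # deformation cost around that type's mean offset
        total += max(
            rel_logp[(i, j)][locs[i]][t] - np.sum((dx - np.asarray(mu)) ** 2)
            for t, mu in enumerate(rel_mean[(i, j)])
        )
    return total
```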

    Parsing Occluded People by Flexible Compositions

    This paper presents an approach to parsing humans when there is significant occlusion. We model humans using a graphical model which has a tree structure building on recent work [32, 6], and exploit the connectivity prior that, even in the presence of occlusion, the visible nodes form a connected subtree of the graphical model. We call each connected subtree a flexible composition of object parts. This involves a novel method for learning occlusion cues. During inference we need to search over a mixture of different flexible models. By exploiting part sharing, we show that this inference can be done extremely efficiently, requiring only twice as many computations as searching for the entire object (i.e., not modeling occlusion). We evaluate our model on the standard benchmark "We Are Family" Stickmen dataset and obtain significant performance improvements over the best alternative algorithms. Comment: CVPR 15 Camera Ready
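
    The sketch below illustrates, under simplifying assumptions, how part sharing lets a single dynamic-programming pass score every flexible composition: each child branch of the kinematic tree is either attached or pruned at an occlusion penalty. The location-free scores, the penalty term, and the function name are illustrative, not the paper's exact formulation.

```python
def best_subtree(node, children, unary, pairwise, occlusion_cost):
    """Best score of a connected composition rooted at `node` (illustrative).

    children[i]       : list of child part indices of part i in the tree
    unary[i]          : appearance score of part i (already maximised over location)
    pairwise[(i, j)]  : best pairwise score between parts i and j
    occlusion_cost[j] : cost of declaring part j (and its subtree) occluded
    """
    score = unary[node]
    for child in children[node]:
        # either keep the child branch (and its best subtree) or prune it
        keep = pairwise[(node, child)] + best_subtree(
            child, children, unary, pairwise, occlusion_cost)
        drop = -occlusion_cost[child]
        score += max(keep, drop)
    # child messages are shared across all compositions, so the extra work over
    # full-object inference stays roughly within a factor of two
    return score
```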

    Joint Multi-Person Pose Estimation and Semantic Part Segmentation

    Human pose estimation and semantic part segmentation are two complementary tasks in computer vision. In this paper, we propose to solve the two tasks jointly for natural multi-person images, in which the estimated pose provides an object-level shape prior to regularize part segments, while the part-level segments constrain the variation of pose locations. Specifically, we first train two fully convolutional neural networks (FCNs), namely Pose FCN and Part FCN, to provide initial estimates of the pose joint potential and the semantic part potential. Then, to refine pose joint locations, the two types of potentials are fused with a fully-connected conditional random field (FCRF), where a novel segment-joint smoothness term is used to encourage semantic and spatial consistency between parts and joints. To refine part segments, the refined pose and the original part potential are integrated through a Part FCN, where the skeleton feature from the pose serves as an additional regularization cue for part segments. Finally, to reduce the complexity of the FCRF, we use human detection boxes and infer the graph inside each box, making the inference forty times faster. Since there is no dataset that contains both part segments and pose labels, we extend the PASCAL VOC part dataset with human pose joints and perform extensive experiments to compare our method against several recent strategies. We show that on this dataset our algorithm surpasses competing methods by a large margin in both tasks. Comment: This paper has been accepted by CVPR 201
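
    As a loose sketch of the part-refinement step described above, the snippet below fuses the initial part potentials with a rasterized skeleton cue built from the FCRF-refined joints and feeds them to a second-stage Part FCN. The tensor layout, the disc-shaped skeleton cue, and the callable `part_fcn` are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def refine_parts(part_potential, refined_joints, part_fcn):
    """Refine part segments using refined pose joints (illustrative).

    part_potential : (H, W, P) semantic-part score maps from the first Part FCN
    refined_joints : list of (x, y) joint locations after FCRF refinement
    part_fcn       : callable taking an (H, W, P + 1) array and returning
                     refined (H, W, P) part scores (second-stage Part FCN)
    """
    h, w, _ = part_potential.shape
    skeleton = np.zeros((h, w, 1), dtype=np.float32)
    yy, xx = np.ogrid[:h, :w]
    for x, y in refined_joints:
        # crude skeleton cue: mark a small disc around each refined joint
        skeleton[(yy - y) ** 2 + (xx - x) ** 2 <= 5 ** 2] = 1.0
    # concatenate the skeleton cue as an extra channel and re-run the Part FCN
    return part_fcn(np.concatenate([part_potential, skeleton], axis=-1))
```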

    Detect What You Can: Detecting and Representing Objects using Holistic Models and Body Parts

    Detecting objects becomes difficult when we need to deal with large shape deformation, occlusion and low resolution. We propose a novel approach to i) handle large deformations and partial occlusions in animals (as examples of highly deformable objects), ii) describe them in terms of body parts, and iii) detect them when their body parts are hard to detect (e.g., animals depicted at low resolution). We represent the holistic object and body parts separately and use a fully connected model to arrange templates for the holistic object and body parts. Our model automatically decouples the holistic object or body parts from the model when they are hard to detect. This enables us to represent a large number of holistic-object and body-part combinations to better deal with different “detectability” patterns caused by deformations, occlusion and/or low resolution. We apply our method to the six animal categories in the PASCAL VOC dataset and show that our method significantly improves the state of the art (by 4.1% AP) and provides a richer representation for objects. During training we use annotations for body parts (e.g., head, torso), making use of a new dataset of fully annotated object parts for PASCAL VOC 2010, which provides a mask for each part. This material is based upon work supported by the Center for Minds, Brains and Machines (CBMM), funded by NSF STC award CCF-1231216.
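
    A minimal sketch of the decoupling idea follows, under the assumption that configurations can be enumerated exhaustively: every combination of the holistic template and body-part templates is scored, with dropped (hard-to-detect) elements paying a learned bias. The names and the brute-force search are illustrative only.

```python
from itertools import combinations

def best_configuration(scores, pairwise, drop_bias):
    """Return the best score over template subsets (illustrative brute force).

    scores[k]        : detection score of template k ('holistic', 'head', ...)
    pairwise[(a, b)] : spatial-consistency score between templates a and b
    drop_bias[k]     : bias paid for decoupling (dropping) template k
    """
    templates = list(scores)
    best = float('-inf')
    for r in range(1, len(templates) + 1):
        for kept in combinations(templates, r):
            s = sum(scores[k] for k in kept)
            # fully connected spatial terms among the templates that are kept
            s += sum(pairwise[(a, b)] for a in kept for b in kept
                     if (a, b) in pairwise)
            # pay a bias for every template decoupled from the model
            s -= sum(drop_bias[k] for k in templates if k not in kept)
            best = max(best, s)
    return best
```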

    Uncertainty-informed Mutual Learning for Joint Medical Image Classification and Segmentation

    Classification and segmentation are crucial in medical image analysis as they enable accurate diagnosis and disease monitoring. However, current methods often prioritize mutual learning of features and shared model parameters, while neglecting the reliability of the features and the resulting performance. In this paper, we propose a novel Uncertainty-informed Mutual Learning (UML) framework for reliable and interpretable medical image analysis. Our UML introduces reliability to joint classification and segmentation tasks, leveraging mutual learning with uncertainty to improve performance. To achieve this, we first use evidential deep learning to provide image-level and pixel-wise confidences. Then, an Uncertainty Navigator Decoder is constructed to make better use of mutual features and to generate segmentation results. In addition, an Uncertainty Instructor is proposed to screen reliable masks for classification. Overall, UML produces confidence estimates for features and performance in each task (classification and segmentation). Experiments on public datasets demonstrate that our UML outperforms existing methods in terms of both accuracy and robustness. Our UML has the potential to advance the development of more reliable and explainable medical image analysis models. We will release the code for reproduction after acceptance. Comment: 13 pages
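
    The image-level and pixel-wise confidences mentioned above can be derived with the standard evidential (subjective-logic) formulation sketched below; the exact parameterization used in UML may differ, so treat this as an assumption-laden illustration rather than the framework's implementation.

```python
import numpy as np

def evidential_confidence(evidence):
    """Turn per-class evidence into probabilities and an uncertainty mass.

    evidence : (..., K) non-negative evidence per class, e.g. from a softplus head;
               works per image (shape (K,)) or per pixel (shape (H, W, K)).
    returns  : (expected class probabilities, uncertainty mass in [0, 1])
    """
    evidence = np.asarray(evidence, dtype=np.float64)
    k = evidence.shape[-1]
    alpha = evidence + 1.0                       # Dirichlet parameters
    strength = alpha.sum(axis=-1, keepdims=True) # Dirichlet strength S
    prob = alpha / strength                      # expected class probabilities
    uncertainty = k / strength.squeeze(-1)       # mass not assigned to any class
    return prob, uncertainty
```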

    A SVM-based method for identifying fracture modes of rock using WVD spectrogram features of AE signals

    To achieve highly efficient and accurate identification of fracture modes, including tension and shear fractures, during rock failure, an intelligent identification method based on Wigner-Ville distribution (WVD) spectrogram features of acoustic emission (AE) signals was proposed. The method was constructed in the following steps. Firstly, AE hits corresponding to tension and shear fractures were obtained by conducting the Brazilian disc test (tension fracture) and the direct shear test (shear fracture) on limestone. Secondly, the WVD spectrograms of these tensile-type and shear-type AE hits were extracted and then transformed into relatively low-dimensional image features, forming the sample set, using the gray-level co-occurrence matrix (GLCM) and the histogram of oriented gradients (HOG). Finally, on the basis of the processed and classified sample set of WVD spectrogram features, an identification model of rock fracture modes was established with a support vector machine (SVM) learning algorithm. To verify the method, the fracture modes of limestone subjected to biaxial compression were identified. The results showed that the method not only clearly reveals the change in fracture mode from tension-dominated to shear-dominated fractures, but also has advantages over the RA-AF value method in applicability, accuracy and practicality.
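
    A hedged sketch of the feature-extraction and classification pipeline follows, assuming each WVD spectrogram has already been rendered as an 8-bit grayscale image of fixed size; the GLCM/HOG settings and the RBF kernel are assumptions rather than the paper's exact configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, hog
from sklearn.svm import SVC

def spectrogram_features(img_u8):
    """img_u8: 2-D uint8 WVD spectrogram image -> 1-D GLCM + HOG feature vector."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = np.hstack([graycoprops(glcm, p).ravel()
                         for p in ('contrast', 'homogeneity', 'energy', 'correlation')])
    shape = hog(img_u8, orientations=9, pixels_per_cell=(16, 16),
                cells_per_block=(2, 2))
    return np.hstack([texture, shape])

def train_classifier(spectrograms, labels):
    """spectrograms: equally sized uint8 images; labels: 0 = tension, 1 = shear."""
    X = np.vstack([spectrogram_features(s) for s in spectrograms])
    return SVC(kernel='rbf', C=1.0, gamma='scale').fit(X, labels)
```

    In practice the GLCM distances and angles, the HOG cell size, and the SVM hyperparameters would be tuned to the spectrogram resolution and the sample set.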

    A novel VP2/VP3 recombinant Senecavirus A isolated in northern China

    Senecavirus A (SVA), previously called the Seneca Valley virus, is the only member of the genus Senecavirus within the family Picornaviridae. The virus was discovered serendipitously in 2002 and named Seneca Valley virus 001 (SVV-001). SVA is an emerging pathogen that can cause vesicular lesions and epidemic transient neonatal losses in swine. In this study, an SVA strain was isolated from a pig herd in Shandong Province, China, and identified as SVA-CH-SDFX-2022. The full-length genome was 7282 nucleotides (nt), excluding the poly(A) tail, and contained a single open reading frame (ORF). Phylogenetic analysis showed that the isolate shares its genomic organization with other previously reported SVA isolates and has high nucleotide identities of 90.5% to 99.6% with them. In vitro characterization demonstrated that the virus has robust growth ability in cell culture. A recombination event was found in the SVA-CH-SDFX-2022 isolate between nt 1836 and nt 2710, a region spanning parts of the VP2 and VP3 genes. Given increasing infection rates and large economic losses, these findings underscore the importance of faster vaccine development and a better understanding of virus infection and spread. This novel incursion has substantial implications for the regional control of transboundary vesicular diseases, and the isolate will be available for further study of the epidemiology of porcine SVA. Our findings provide useful data for studying SVA in pigs.